12 research outputs found

    Automated Filtering of Eye Movements Using Dynamic AOI in Multiple Granularity Levels

    Get PDF
    Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs to extract eye movement data, they may require users to manually draw boundaries of AOIs on eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while those generated by object instance segmentation models capture 30% of eye movements.
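    A minimal sketch of the gaze-filtering step described above, assuming gaze samples arrive as per-frame (x, y) points and detected AOIs are available as polygon vertex lists per frame; the function names and data formats are illustrative assumptions, not the authors' actual pipeline.

        # Hedged sketch: keep only gaze samples that fall inside a detected AOI polygon.
        def point_in_polygon(x, y, polygon):
            """Ray-casting test: True if (x, y) lies inside the polygon
            given as a list of (px, py) vertices."""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                # Does the horizontal ray from (x, y) cross this edge?
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        def filter_gaze_by_aois(gaze_samples, frame_aois):
            """gaze_samples: list of dicts like {"frame": 12, "x": 310.5, "y": 244.0}
            frame_aois:   dict mapping frame index -> {aoi_label: polygon vertices}
            Returns a list of (sample, aoi_label) pairs for samples inside an AOI."""
            hits = []
            for sample in gaze_samples:
                for label, polygon in frame_aois.get(sample["frame"], {}).items():
                    if point_in_polygon(sample["x"], sample["y"], polygon):
                        hits.append((sample, label))
                        break  # assign each sample to at most one AOI
            return hits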

    Analysis of Reading Patterns of Scientific Literature Using Eye-Tracking Measures

    Get PDF
    Scientific literature is crucial for researchers to inspire novel research ideas and find solutions to various problems. This study presents a reading task for novice researchers using eye-tracking measures, focusing on the scan paths, fixations, and pupil dilation of the participants. In this study, three participants were asked to read a pre-selected research paper while wearing an eye-tracking device (PupilLabs Core, 200 Hz). We specified sections of the research paper as areas of interest (title, abstract, motivation, methodology, conclusion) to analyze the eye movements. We then extracted eye movement data from the recordings and processed it using an eye movement processing pipeline. To analyze how eye movements change throughout the reading task, we calculated fixation counts, fixation duration, and the index of pupillary activity (IPA) for each participant. The IPA is calculated from pupil diameter; a low IPA reflects low cognitive load, whereas a high IPA reflects high cognitive load. When analyzing scan paths, we observed that all participants started reading from the title section of the paper. Beyond that, no two participants followed the same scan path when reading the paper. The average fixation counts and durations suggested that participants fixated more on the methodology section and spent more time reading it compared to the other sections. Moreover, participants' IPA was highest when reading the title section, indicating higher cognitive demand prior to exploring the research idea presented in the paper. The lowest IPA was observed in the methodology section, indicating lower cognitive load. The purpose of this study was to analyze the scan paths of novice researchers while reading a research paper. We observed different scan paths among participants, and a higher fixation count and duration when reading the methodology section, with a comparatively low cognitive load.
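    To illustrate the kind of per-section aggregation described above, the sketch below computes fixation counts and fixation durations per AOI; the fixation record format and field names are assumptions for illustration, not the study's actual processing pipeline.

        # Hedged sketch: aggregate fixation count and duration per paper section (AOI).
        from collections import defaultdict

        def summarize_fixations(fixations):
            """fixations: iterable of dicts like {"aoi": "methodology", "duration_ms": 215.0}
            Returns {aoi: {"count": n, "total_duration_ms": t, "mean_duration_ms": m}}."""
            counts = defaultdict(int)
            totals = defaultdict(float)
            for fix in fixations:
                counts[fix["aoi"]] += 1
                totals[fix["aoi"]] += fix["duration_ms"]
            return {
                aoi: {
                    "count": counts[aoi],
                    "total_duration_ms": totals[aoi],
                    "mean_duration_ms": totals[aoi] / counts[aoi],
                }
                for aoi in counts
            }

        # Sections with higher counts/durations (e.g., methodology) attracted more reading effort.
        summary = summarize_fixations([
            {"aoi": "title", "duration_ms": 180.0},
            {"aoi": "methodology", "duration_ms": 240.0},
            {"aoi": "methodology", "duration_ms": 310.0},
        ])
        print(summary)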

    ADHD Prediction Through Analysis of Eye Movements With Graph Convolution Network

    Get PDF
    Processing speech with background noise requires appropriate parsing of the distorted auditory signal, fundamental language abilities, and a sufficiently high signal-to-noise ratio. Adolescents with ADHD have difficulty processing speech with background noise due to reduced inhibitory control and working memory capacity. In this study we utilize audiovisual Speech-In-Noise performance and eye-tracking measures of young adults with ADHD compared to age-matched controls, and generate graphs for ADHD evaluation using the eye-tracking data. We form graphs using eye-tracking features (fixation count; average, total, and standard deviation of fixation duration; maximum and minimum saccade peak velocity; minimum, average, and standard deviation of saccade amplitude) and connections among trials in terms of subject, background noise, and sentence. We created multiple undirected multigraphs, each with 830 nodes, where each node corresponds to a trial. Each trial is defined by a participant, a background noise level, and the sentence the participant was presented. For instance, the k-th node carries the information {'background noise level': x, 'sentence': i, 'subject': j}. For each node, we create a feature matrix using the aforementioned eye gaze metrics. A link between a pair of nodes means that they belong to the same edge category. We introduced different edge categories: Same Background Noise Level; Same Subject; Same Sentence; Same Background Noise Level and Same Subject; Same Subject and Same Sentence; and Same Background Noise Level and Same Sentence. In our Graph Convolutional Network (GCN) model, we use the node embeddings and an adjacency matrix representation as the input. The GCN layer is a multiplication of the inputs, the weights, and the normalized adjacency matrix. From the results we observed that only the "Same Background Noise Level and Same Subject" edge category gave slightly better results in terms of ROC AUC and precision. Additionally, we visualized what the model has learned by accessing the embeddings before the classification layer.
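    The GCN layer described above (a multiplication of the inputs, the weights, and the normalized adjacency matrix) can be sketched as follows; the NumPy implementation, toy shapes, and symmetric normalization with self-loops are assumptions for illustration, not the authors' code.

        # Hedged sketch: one GCN layer of the form H' = ReLU(A_hat @ H @ W).
        import numpy as np

        def normalize_adjacency(adj):
            """A_hat = D^{-1/2} (A + I) D^{-1/2} for a dense 0/1 adjacency matrix."""
            adj_hat = adj + np.eye(adj.shape[0])
            deg = adj_hat.sum(axis=1)
            d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
            return d_inv_sqrt @ adj_hat @ d_inv_sqrt

        def gcn_layer(node_features, adj_norm, weights):
            """One graph-convolution step: aggregate neighbor features, then project."""
            return np.maximum(adj_norm @ node_features @ weights, 0.0)  # ReLU

        # Toy example: 4 trials (nodes) with gaze-feature vectors, edges linking trials
        # that share an attribute (e.g., same background-noise level and subject).
        rng = np.random.default_rng(0)
        features = rng.normal(size=(4, 8))            # node feature matrix
        adjacency = np.array([[0, 1, 0, 0],
                              [1, 0, 1, 0],
                              [0, 1, 0, 1],
                              [0, 0, 1, 0]], float)   # "same category" links
        weights = rng.normal(size=(8, 16))            # learnable layer weights
        embeddings = gcn_layer(features, normalize_adjacency(adjacency), weights)
        print(embeddings.shape)                       # (4, 16)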

    Predicting ADHD Using Eye Gaze Metrics Indexing Working Memory Capacity

    Get PDF
    ADHD is increasingly recognized as a diagnosis that persists into adulthood, impacting educational and economic outcomes. There is an increased need to accurately diagnose this population through the development of reliable and valid outcome measures reflecting core diagnostic criteria. For example, adults with ADHD have reduced working memory capacity (WMC) compared to their peers. A reduction in WMC indicates attention control deficits, which align with many symptoms outlined on the behavioral checklists used to diagnose ADHD. Using computational methods, such as machine learning, to characterize the relationship between ADHD and measures of WMC would be useful for advancing our understanding and treatment of ADHD in adults. This chapter outlines a feasibility study in which eye tracking was used to measure eye gaze metrics during a WMC task for adults with and without ADHD, and machine learning algorithms were applied to generate a feature set unique to the ADHD diagnosis. The chapter summarizes the purpose, methods, results, and impact of this study.
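    As a hedged illustration of the kind of analysis such a feasibility study might involve, the sketch below fits a standard classifier to per-participant gaze metrics and inspects feature importances; the feature names, synthetic data, and choice of a random forest are assumptions, not the chapter's actual methods.

        # Hedged sketch: relate eye gaze metrics from a WMC task to an ADHD label and
        # rank which features matter most. All data and labels here are placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        feature_names = ["fixation_count", "mean_fixation_duration",
                         "mean_saccade_amplitude", "mean_pupil_diameter"]

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, len(feature_names)))   # per-participant gaze metrics
        y = rng.integers(0, 2, size=60)                 # 1 = ADHD, 0 = control (placeholder)

        clf = RandomForestClassifier(n_estimators=200, random_state=1)
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

        clf.fit(X, y)
        for name, importance in zip(feature_names, clf.feature_importances_):
            print(f"{name}: {importance:.3f}")          # feature relevance to the diagnosis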

    Audiovisual Speech-In-Noise (SIN) Performance of Young Adults with ADHD

    Full text link
    Adolescents with Attention-deficit/hyperactivity disorder (ADHD) have difficulty processing speech with background noise due to reduced inhibitory control and working memory capacity (WMC). This paper presents a pilot study of an audiovisual Speech-In-Noise (SIN) task for young adults with ADHD compared to age-matched controls using eye-tracking measures. The audiovisual SIN task consists of six varying levels of background babble, accompanied by visual cues. A significant difference between the ADHD and neurotypical (NT) groups was observed at a 15 dB signal-to-noise ratio (SNR). These results contribute to the literature on young adults with ADHD. Comment: To be published in Symposium on Eye Tracking Research and Applications (ETRA '20 Short Papers), 6 pages, 3 figures, 2 tables.
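    A rough sketch of how a group difference at each SNR level might be tested, using an independent-samples t-test on a placeholder eye-tracking measure; the SNR values, sample sizes, and dependent measure are assumptions, not the paper's analysis.

        # Hedged sketch: compare ADHD vs. neurotypical groups at each babble level.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        snr_levels_db = [0, 5, 10, 15, 20, 25]   # six background-babble levels (assumed values)

        for snr in snr_levels_db:
            adhd = rng.normal(loc=300, scale=40, size=15)     # e.g., mean fixation duration (ms)
            control = rng.normal(loc=280, scale=40, size=15)
            t_stat, p_value = stats.ttest_ind(adhd, control)
            print(f"SNR {snr:2d} dB: t = {t_stat:5.2f}, p = {p_value:.3f}")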

    Objective Measure of Working Memory Capacity Using Eye Movements

    Get PDF
    Human-autonomy teaming (HAT) has become an important area of research due to the autonomous systems being developed for different applications, such as remotely controlled aircraft. Many remotely controlled vehicles will be controlled by automated systems, with a human monitor who may be supervising multiple vehicles simultaneously. The attention and working memory capacity of operators of remotely controlled vehicles must be maintained at appropriate levels during operation. However, there is currently no direct method of determining working memory capacity. This matters because working memory capacity reflects how information is stored over the short term and interacts with long-term memory, with a capacity limit that depends on attention and other executive functions. This study uses machine learning algorithms to find an objective relationship between participants' eye-tracking measurements and their responses on the NASA-TLX, which measures subjective workload. The dataset used in this study was collected and published by researchers at the University of Windsor and is publicly available.
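    A hedged sketch of relating eye-tracking features to NASA-TLX workload ratings with a regression model; the feature set, synthetic data, and choice of ridge regression are illustrative assumptions rather than the study's actual machine learning pipeline.

        # Hedged sketch: map eye-tracking features to NASA-TLX workload scores to probe
        # an objective correlate of subjective workload. Data here are placeholders.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(80, 5))        # e.g., fixation, saccade, blink, pupil features
        y = rng.uniform(0, 100, size=80)    # NASA-TLX overall workload scores (placeholder)

        model = Ridge(alpha=1.0)
        r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print("Cross-validated R^2:", r2_scores.mean())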

    Eye Movement and Pupil Measures: A Review

    Get PDF
    Our subjective visual experience involves a complex interaction between our eyes, our brain, and the surrounding world. This interaction gives us the sense of sight, color, stereopsis, distance, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review that considers the various gaze measures becomes increasingly relevant, especially given our ability to make sense of these signals under different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movement and pupil measures. We first describe the main oculomotor events studied in the literature and the characteristics exploited by different measures. Next, we review various eye movement and pupil measures from the prior literature. Finally, we discuss our observations based on applications of these measures, the benefits and practical challenges involved in using them, and our recommendations for future eye-tracking research directions.

    Toward a Real-Time Index of Pupillary Activity as an Indicator of Cognitive Load

    Get PDF
    The Low/High Index of Pupillary Activity (LHIPA), an eye-tracked measure of pupil diameter oscillation, is redesigned and implemented to function in real time. The novel Real-time IPA (RIPA) is shown to discriminate cognitive load in re-streamed data from earlier experiments. The rationale for the RIPA is tied to the functioning of the human autonomic nervous system, yielding a hybrid measure based on the ratio of low to high frequencies of pupil oscillation. The paper's contribution is its documentation of the calculation of the RIPA. As with the LHIPA, researchers can apply this metric to their own experiments wherever a measure of cognitive load is of interest.
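    For intuition only, the sketch below computes a crude low/high frequency band-power ratio of a pupil-diameter signal with an FFT; this is not the wavelet-based LHIPA/RIPA algorithm documented in the paper, and the cut-off frequency and example signal are placeholder assumptions.

        # Hedged sketch: ratio of low- to high-frequency power in a pupil-diameter signal.
        import numpy as np

        def low_high_ratio(pupil_diameter, sample_rate_hz, split_hz=0.5):
            """Ratio of spectral power below vs. above split_hz (assumed cut-off)."""
            signal = np.asarray(pupil_diameter, float)
            signal = signal - signal.mean()                 # remove DC component
            spectrum = np.abs(np.fft.rfft(signal)) ** 2     # power spectrum
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
            low = spectrum[(freqs > 0) & (freqs <= split_hz)].sum()
            high = spectrum[freqs > split_hz].sum()
            return low / high if high > 0 else float("inf")

        # Example on a synthetic 60 s recording sampled at 120 Hz.
        t = np.arange(0, 60, 1 / 120)
        pupil = (3.0 + 0.1 * np.sin(2 * np.pi * 0.2 * t)
                 + 0.02 * np.random.default_rng(4).normal(size=t.size))
        print(low_high_ratio(pupil, sample_rate_hz=120))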